Session 1-G

Edge Computing I

Tue, Jul 7, 2:00 PM — 3:30 PM EDT

Coded Edge Computing

Kwang Taik Kim (Purdue University, USA); Carlee Joe-Wong (Carnegie Mellon University, USA); Mung Chiang (Purdue University, USA)

Running intensive compute tasks across a network of edge devices introduces novel distributed computing challenges: edge devices are heterogeneous in their compute, storage, and communication capabilities, and can exhibit unpredictable straggler effects and failures. In this work, we propose a novel error-correcting-code inspired strategy for executing computing tasks in edge computing environments, designed to mitigate the variability in response time and errors caused by edge devices' heterogeneity and lack of reliability. Unlike prior coding approaches, we incorporate partially unfinished coded tasks into our computation recovery, which allows us to achieve smooth performance degradation with low-complexity decoding when the coded tasks run on edge devices under a fixed deadline. By further carrying out coding on edge devices as well as a master node, the proposed strategy also alleviates communication bottlenecks during data shuffling and is amenable to distributed implementation in a highly variable and resource-limited network. Such distributed encoding forces us to solve new decoding challenges. Using a representative implementation based on federated multi-task learning frameworks, we carry out extensive performance simulations, which demonstrate that the proposed strategy offers significant gains in latency and accuracy over conventional coded computing schemes.
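To make the coding idea concrete, here is a minimal sketch of the classic (n, k) MDS-coded matrix-vector multiplication that strategies like this build on: a master encodes row-blocks of the data, each worker computes on one coded block, and the result is recovered from any k responses, so up to n-k stragglers that miss the deadline can be ignored. The Vandermonde encoder, block sizes, and worker set below are illustrative assumptions; the paper's scheme additionally exploits partially finished coded tasks, which this sketch does not model.

```python
# Toy sketch of MDS-coded distributed matrix-vector multiplication.
# Illustrates the "recover from any k of n workers" idea that coded edge
# computing builds on; partially finished tasks are not modeled here.
import numpy as np

def encode(A, n, k):
    """Split A into k row-blocks and produce n coded blocks via a Vandermonde code."""
    blocks = np.array_split(A, k, axis=0)
    xs = np.arange(1, n + 1)
    G = np.vander(xs, k, increasing=True).astype(float)   # n x k generator
    return [sum(G[i, j] * blocks[j] for j in range(k)) for i in range(n)], G

def decode(results, G, k):
    """Recover the k uncoded partial products from any k worker results."""
    ids = sorted(results)[:k]                       # any k responders suffice
    Gk = G[ids, :]                                  # k x k Vandermonde, invertible
    Y = np.stack([results[i] for i in ids])         # coded partial products
    return np.linalg.solve(Gk, Y)                   # uncoded partial products

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, x = rng.standard_normal((12, 8)), rng.standard_normal(8)
    n, k = 5, 3                                     # tolerate n - k = 2 stragglers
    coded, G = encode(A, n, k)
    # Pretend workers 1 and 4 missed the deadline; only 0, 2, 3 respond.
    results = {i: coded[i] @ x for i in (0, 2, 3)}
    recovered = np.concatenate(decode(results, G, k))
    assert np.allclose(recovered, A @ x)
```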

HotDedup: Managing Hot Data Storage at Network Edge through Optimal Distributed Deduplication

Shijing Li and Tian Lan (George Washington University, USA)

The rapid growth of computing capabilities at the network edge calls for efficient management frameworks that not only place hot data on edge storage for the best accessibility and performance, but also make optimal use of edge storage space. In this paper, we solve a joint optimization problem by exploiting both data popularity (for optimal data access performance) and data similarity (for optimal storage space efficiency). We show that the proposed optimization is NP-hard and develop an algorithm that (i) makes novel use of a delta-similarity graph to capture pairwise data similarity and (ii) leverages the k-MST algorithm to solve a Prize Collecting Steiner Tree problem on the graph. The proposed algorithm is prototyped on an open-source distributed storage system, Cassandra. We evaluate its performance extensively on a real-world testbed with real-world IoT datasets. The algorithm is shown to achieve an over 55% higher edge service rate and to reduce request response time by about 30%.
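As a rough illustration of the joint popularity/similarity objective on a delta-similarity graph, the sketch below greedily places objects on an edge node under a storage budget, charging each new object only the bytes left after delta-compressing it against its best already-placed neighbor, and rewarding it by popularity plus the bytes saved. The object sizes, popularity values, and similarity weights are made up, and this greedy stand-in is not the paper's k-MST-based Prize Collecting Steiner Tree algorithm.

```python
# Toy greedy on a delta-similarity graph: choose which objects to keep on an
# edge node under a storage budget, rewarding popular objects (node prize) and
# pairs that delta-compress well against each other (edge weight = bytes saved).

def greedy_placement(objects, sims, budget):
    """objects: {name: (size_bytes, popularity)}; sims: {(a, b): bytes_saved}."""
    saved = lambda a, b: sims.get((a, b), sims.get((b, a), 0))
    # Seed with the most popular object, stored in full.
    placed = {max(objects, key=lambda o: objects[o][1])}
    used = objects[next(iter(placed))][0]
    while True:
        best, best_gain, best_cost = None, 0.0, 0
        for cand, (size, pop) in objects.items():
            if cand in placed:
                continue
            # Store cand as a delta against its best already-placed neighbor.
            delta_sav = max((saved(cand, p) for p in placed), default=0)
            cost = max(size - delta_sav, 0)
            if used + cost > budget:
                continue
            gain = pop + delta_sav            # prize + storage savings
            if gain > best_gain:
                best, best_gain, best_cost = cand, gain, cost
        if best is None:
            return placed
        placed.add(best)
        used += best_cost

if __name__ == "__main__":
    objs = {"a": (100, 9.0), "b": (100, 5.0), "c": (100, 1.0), "d": (100, 4.0)}
    sims = {("a", "b"): 80, ("b", "c"): 70, ("a", "d"): 10}
    print(greedy_placement(objs, sims, budget=200))   # e.g. {'a', 'b', 'c'}
```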

Joint Configuration Adaptation and Bandwidth Allocation for Edge-based Real-time Video Analytics

Can Wang, Sheng Zhang, Yu Chen and Zhuzhong Qian (Nanjing University, China); Jie Wu (Temple University, USA); Mingjun Xiao (University of Science and Technology of China, China)

Real-time analytics on video data demands intensive computation resources and incurs high energy consumption. Traditional cloud-based video analytics relies on large centralized clusters to ingest video streams. With edge computing, we can offload compute-intensive analysis tasks to a nearby server, thus mitigating the long latency incurred by data transmission over wide area networks. When offloading frames from the front-end device to the edge server, the application configuration (frame sampling rate and frame resolution) impacts several metrics, such as energy consumption, analytics accuracy, and user-perceived latency. In this paper, we study configuration adaptation and bandwidth allocation for multiple video streams that are connected to the same edge node and share an upload link. We propose an efficient online algorithm, called JCAB, which jointly optimizes configuration adaptation and bandwidth allocation to address a number of key challenges in edge-based video analytics systems, including limited edge capacity, unknown network variation, and the intrusive dynamics of video content. Our algorithm is developed based on Lyapunov optimization and Markov approximation, works online without requiring future information, and achieves a provable performance bound. Simulation results show that JCAB can effectively balance analytics accuracy and energy consumption while keeping system latency low.
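The sketch below conveys the flavor of Lyapunov-style (drift-plus-penalty) control that JCAB builds on, without claiming to be JCAB itself: a virtual queue tracks cumulative energy overuse, and each slot the configuration minimizing V times the negated accuracy plus Q times the energy excess is chosen. The accuracy and energy models, configuration set, and parameter values are placeholders.

```python
# Minimal drift-plus-penalty sketch in the spirit of Lyapunov-based control
# (not the JCAB algorithm itself): each slot, pick a (resolution, fps)
# configuration trading analytics accuracy against energy, while a virtual
# queue Q pushes long-term energy use toward a budget.

CONFIGS = [(360, 5), (480, 10), (720, 15), (1080, 30)]   # (resolution, fps)

def accuracy(res, fps):          # assumed utility model, made up
    return (res / 1080) ** 0.5 * (fps / 30) ** 0.3

def energy(res, fps, bw_share):  # assumed cost model, made up
    return 0.002 * res * fps / 30 + 0.5 / max(bw_share, 1e-3)

def choose_config(Q, bw_share, V=10.0, energy_budget=1.5):
    """Minimize V*(-accuracy) + Q*(energy - budget) over configurations."""
    def score(cfg):
        res, fps = cfg
        return -V * accuracy(res, fps) + Q * (energy(res, fps, bw_share) - energy_budget)
    return min(CONFIGS, key=score)

def run(slots=20, bw_share=0.5, energy_budget=1.5):
    Q = 0.0
    for t in range(slots):
        res, fps = choose_config(Q, bw_share, energy_budget=energy_budget)
        e = energy(res, fps, bw_share)
        Q = max(Q + e - energy_budget, 0.0)          # virtual-queue update
        print(f"slot {t:2d}: {res}p@{fps}fps  energy={e:.2f}  Q={Q:.2f}")

if __name__ == "__main__":
    run()
```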

Latency-aware VNF Chain Deployment with Efficient Resource Reuse at Network Edge

Panpan Jin (Huazhong University of Science and Technology, China); Xincai Fei (Huazhong University of Science & Technology, China); Qixia Zhang and Fangming Liu (Huazhong University of Science and Technology, China); Bo Li (Hong Kong University of Science and Technology, Hong Kong)

With the increasing demand for low-latency network services, mobile edge computing (MEC) has emerged as a new paradigm that provides server resources and processing capacity in close proximity to end users. Based on network function virtualization (NFV), network services can be flexibly provisioned as virtual network function (VNF) chains deployed at edge servers. However, due to the resource shortage at the network edge, how to deploy VNF chains with both latency guarantees and resource efficiency remains a challenging problem. In this work, we focus on jointly optimizing the resource utilization of both edge servers and physical links under latency constraints. Specifically, we formulate the VNF chain deployment problem as a mixed integer linear program (MILP) to minimize the total resource consumption. We design a novel two-stage latency-aware VNF deployment scheme: a constrained depth-first search algorithm (CDFSA) selects paths, and a path-based greedy algorithm (PGA) assigns VNFs while reusing as many existing VNF instances as possible. We demonstrate that the proposed algorithm efficiently achieves a near-optimal solution with a theoretically proven worst-case performance bound. Extensive simulation results show that it outperforms three previous heuristic algorithms.
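To illustrate the path-selection stage, here is a small latency-constrained depth-first search that enumerates loop-free paths whose accumulated link latency stays within a bound, pruning branches as soon as they exceed it. The toy topology and latencies are assumptions, and this conveys only the flavor of constrained DFS, not the paper's CDFSA or its PGA placement stage.

```python
# Sketch of a latency-constrained depth-first path search between two edge
# nodes (illustrative only). Edges carry link latency; paths whose accumulated
# latency exceeds the bound are pruned.

def constrained_dfs(graph, src, dst, latency_bound):
    """graph: {u: [(v, latency), ...]}. Return all loop-free paths within the bound."""
    paths = []

    def dfs(node, path, used_latency):
        if node == dst:
            paths.append(list(path))
            return
        for nxt, lat in graph.get(node, []):
            if nxt in path or used_latency + lat > latency_bound:
                continue                      # prune loops and too-slow branches
            path.append(nxt)
            dfs(nxt, path, used_latency + lat)
            path.pop()

    dfs(src, [src], 0)
    return paths

if __name__ == "__main__":
    g = {
        "edge1": [("edge2", 2), ("edge3", 5)],
        "edge2": [("edge4", 2), ("edge3", 1)],
        "edge3": [("edge4", 1)],
    }
    for p in constrained_dfs(g, "edge1", "edge4", latency_bound=5):
        print(p)   # ['edge1', 'edge2', 'edge4'] and ['edge1', 'edge2', 'edge3', 'edge4']
```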

Session Chair

Serge Fdida (Sorbonne University)

Session 2-G

Caching I

Tue, Jul 7, 4:00 PM — 5:30 PM EDT

Exploring the interplay between CDN caching and video streaming performance

Ehab Ghabashneh and Sanjay Rao (Purdue University, USA)

Content Delivery Networks (CDNs) are critical for optimizing Internet video delivery. In this paper, we characterize how CDNs serve video content and the implications for video performance, especially for emerging 4K video streaming. Our work is based on measurements of multiple well-known video publishers served by top CDNs. Our results show that (i) video chunks in a session often see heterogeneous behavior in terms of whether they hit in the CDN and which layer they are served from; and (ii) application throughput can vary significantly even across chunks in a session depending on where they are served from. The differences, while sensitive to client location and CDN, can sometimes be significant enough to affect the viability of 4K streaming. We consider the implications for Adaptive Bit Rate (ABR) algorithms, given that they rely on throughput prediction and are agnostic to whether, and where, objects hit in the CDN. We evaluate the potential benefits of exposing where a video chunk is served from to the client ABR algorithm in the context of the widely studied MPC algorithm. Emulation experiments show that the approach has the potential to reduce prediction inaccuracies and enhance video streaming performance.
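One way to picture the benefit of exposing the serving layer to the client is sketched below: the predictor keeps a separate throughput estimate (here a simple EWMA) per serving layer and, given a hint about where the next chunk sits, predicts with that layer's estimate instead of a single aggregate. The layer names, EWMA smoothing, and numbers are illustrative; this is not the paper's MPC integration, only the prediction idea it motivates.

```python
# Toy layer-aware throughput predictor: one EWMA per serving layer
# (edge hit, mid-tier, origin) instead of a single aggregate estimate.

class LayerAwareThroughput:
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.ewma = {}                       # serving layer -> Mbps estimate

    def observe(self, layer, mbps):
        prev = self.ewma.get(layer, mbps)
        self.ewma[layer] = (1 - self.alpha) * prev + self.alpha * mbps

    def predict(self, hinted_layer=None):
        if hinted_layer in self.ewma:        # hint: where the next chunk sits
            return self.ewma[hinted_layer]
        # Fall back to the mean over layers seen so far.
        return sum(self.ewma.values()) / max(len(self.ewma), 1)

if __name__ == "__main__":
    est = LayerAwareThroughput()
    for layer, mbps in [("edge", 80), ("origin", 15), ("edge", 90), ("origin", 12)]:
        est.observe(layer, mbps)
    print(est.predict("edge"), est.predict("origin"))   # high vs. low estimate
```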

Similarity Caching: Theory and Algorithms

Michele Garetto (Università di Torino, Italy); Emilio Leonardi (Politecnico di Torino, Italy); Giovanni Neglia (Inria, France)

This paper focuses on similarity caching systems, in which a user request for an object o that is not in the cache can be (partially) satisfied by a similar stored object o', at the cost of a loss of user utility. Similarity caching systems can be effectively employed in several application areas, like multimedia retrieval, recommender systems, genome study, and machine learning training/serving. However, despite their relevance, the behavior of such systems is far from being well understood. In this paper, we provide a first comprehensive analysis of similarity caching in the offline, adversarial, and stochastic settings. We show that similarity caching raises significant new challenges, for which we propose the first dynamic policies with some optimality guarantees. We evaluate the performance of our schemes under both synthetic and real request traces.
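A minimal similarity-cache loop, for intuition only: a request is an approximate hit if some cached item lies within a dissimilarity threshold (incurring a utility loss proportional to the distance), and otherwise the item is fetched and inserted with plain LRU eviction. The feature vectors, distance function, threshold, and LRU policy are assumptions; the paper studies considerably smarter dynamic policies with optimality guarantees.

```python
# Minimal similarity-cache sketch: approximate hits within a dissimilarity
# threshold at a utility loss, otherwise fetch and insert with LRU eviction.
from collections import OrderedDict

class SimilarityCache:
    def __init__(self, capacity, threshold, dist):
        self.capacity, self.threshold, self.dist = capacity, threshold, dist
        self.items = OrderedDict()            # key -> feature vector

    def request(self, key, vec):
        best = min(self.items.items(),
                   key=lambda kv: self.dist(vec, kv[1]), default=None)
        if best and self.dist(vec, best[1]) <= self.threshold:
            self.items.move_to_end(best[0])   # approximate hit, refresh LRU
            return ("hit", best[0], self.dist(vec, best[1]))
        if len(self.items) >= self.capacity:  # miss: fetch, insert, evict LRU
            self.items.popitem(last=False)
        self.items[key] = vec
        return ("miss", key, 0.0)

if __name__ == "__main__":
    euclid = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    cache = SimilarityCache(capacity=2, threshold=1.0, dist=euclid)
    for k, v in [("a", (0.0,)), ("b", (5.0,)), ("a'", (0.4,)), ("c", (9.0,))]:
        print(k, cache.request(k, v))
```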

T-cache: Dependency-free Ternary Rule Cache for Policy-based Forwarding

Ying Wan (Tsinghua University, China); Haoyu Song (Futurewei Technologies, USA); Yang Xu (Fudan University, China); Yilun Wang (Tsinghua University, China); Tian Pan (Beijing University of Posts and Telecommunications, China); Chuwen Zhang and Bin Liu (Tsinghua University, China)

Ternary Content Addressable Memory (TCAM) is widely used by modern routers and switches to support policy-based forwarding. However, the limited TCAM capacity does not scale with the ever-increasing rule table size. Using TCAM just as a rule cache is a plausible solution, but one must resolve several tricky issues, including rule dependencies and the associated TCAM updates. In this paper, we propose a new approach that generates dependency-free rules to cache. Removing rule dependencies also eliminates the TCAM update problem. We provide the complete T-cache system design, including slow-path processing and cache replacement. Evaluations based on real-world and synthesized rule tables and traces show that T-cache is efficient and robust for network traffic in various scenarios.
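The sketch below conveys why dependency-free cache entries make cache management easy: if cached entries can never overlap one another (here, trivially, because they are exact matches on the lookup key), insertions and evictions need no rule reordering, and misses simply fall back to the full software table. This exact-match stand-in is an assumption for illustration; T-cache itself generates dependency-free ternary rules, which this toy does not attempt.

```python
# Toy rule-cache workflow with dependency-free (exact-match) cache entries:
# entries never overlap, so insert/evict requires no reordering or updates.

class RuleCache:
    def __init__(self, full_table, capacity):
        self.full_table = full_table          # [(prefix, prefix_len, action)], priority = order
        self.capacity = capacity
        self.cache = {}                       # exact key -> action (dependency-free)

    def _slow_path(self, key):
        """First-match (priority-order) lookup in the full software rule table."""
        for prefix, plen, action in self.full_table:
            if key >> (32 - plen) == prefix >> (32 - plen):
                return action
        return "drop"

    def lookup(self, key):
        if key in self.cache:                 # fast path: cache hit
            return self.cache[key], "cache"
        action = self._slow_path(key)         # miss: resolve in slow path
        if len(self.cache) >= self.capacity:  # simple eviction; no rule reordering
            self.cache.pop(next(iter(self.cache)))
        self.cache[key] = action              # safe to insert: no dependencies
        return action, "slow"

if __name__ == "__main__":
    ip = lambda a, b, c, d: (a << 24) | (b << 16) | (c << 8) | d
    table = [(ip(10, 0, 1, 0), 24, "fwd:1"), (ip(10, 0, 0, 0), 8, "fwd:2")]
    rc = RuleCache(table, capacity=2)
    print(rc.lookup(ip(10, 0, 1, 7)))   # ('fwd:1', 'slow'), then cached
    print(rc.lookup(ip(10, 0, 1, 7)))   # ('fwd:1', 'cache')
```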

Universally Stable Cache Networks

Yuanyuan Li and Stratis Ioannidis (Northeastern University, USA)

We consider a cache network in which intermediate nodes equipped with caches can serve content requests. We model this network as a universally stable queuing system, in which packets carrying identical responses are consolidated before being forwarded downstream. We refer to the resulting queues as M/M/1c or counting queues, as each consolidated packet carries a counter indicating its multiplicity. Cache networks comprising such queues are hard to analyze; we propose two approximations: one via M/M/∞ queues, and one based on M/M/1c queues under the assumption of Poisson arrivals. We show that, in both cases, the problem of jointly determining (a) content placements and (b) service rates admits a poly-time, 1-1/e approximation algorithm. Numerical evaluations indicate that both approximations yield good solutions in practice, significantly outperforming competitors.
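For intuition about the kind of placement optimization involved, here is a generic greedy that repeatedly adds the (node, item) pair with the largest marginal gain in a monotone caching-gain objective until every cache is full; greedy heuristics of this flavor are the standard starting point for approximation guarantees of this kind. The demand matrix and the stand-in gain function (request rate served by a local hit) are assumptions and do not reflect the paper's queueing-based objective or its actual algorithm.

```python
# Generic greedy for a monotone caching-gain objective: repeatedly add the
# (node, item) placement with the largest marginal gain until caches are full.

def greedy_placement(nodes, items, capacity, gain):
    """gain(placement) must be monotone; placement: {node: set(items)}."""
    placement = {n: set() for n in nodes}
    while True:
        base = gain(placement)
        best, best_delta = None, 0.0
        for n in nodes:
            if len(placement[n]) >= capacity[n]:
                continue
            for it in items:
                if it in placement[n]:
                    continue
                placement[n].add(it)
                delta = gain(placement) - base     # marginal gain of adding it at n
                placement[n].remove(it)
                if delta > best_delta:
                    best, best_delta = (n, it), delta
        if best is None:
            return placement
        placement[best[0]].add(best[1])

if __name__ == "__main__":
    demand = {("u", "a"): 5.0, ("u", "b"): 2.0, ("v", "a"): 1.0, ("v", "c"): 4.0}
    # Stand-in gain: total request rate served by a local cache hit.
    g = lambda pl: sum(r for (n, it), r in demand.items() if it in pl[n])
    print(greedy_placement(["u", "v"], ["a", "b", "c"], {"u": 1, "v": 1}, g))
    # -> {'u': {'a'}, 'v': {'c'}}
```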

Session Chair

Aaron D Striegel (University of Notre Dame)
